
    Index Reduction for Differential-Algebraic Equations with Mixed Matrices

    Differential-algebraic equations (DAEs) are widely used for modeling dynamical systems. The difficulty of solving a DAE numerically is measured by its differentiation index. For highly accurate simulation of dynamical systems, it is important to convert high-index DAEs into low-index DAEs. Most existing simulation software packages for dynamical systems are equipped with the index-reduction algorithm of Mattsson and Söderlind. Unfortunately, this algorithm fails if there are numerical cancellations, which are often caused by accurate constants in structural equations. Distinguishing such accurate constants from generic parameters that represent physical quantities, Murota and Iri introduced the notion of a mixed matrix as a mathematical tool for faithful model description in the structural approach to systems analysis. For DAEs described with the use of mixed matrices, efficient algorithms to compute the index have been developed by exploiting matroid theory. This paper presents an index-reduction algorithm for linear DAEs whose coefficient matrices are mixed matrices, i.e., linear DAEs containing physical quantities as parameters. Our algorithm detects numerical cancellations between accurate constants and transforms a DAE into an equivalent DAE to which Mattsson–Söderlind's index-reduction algorithm is applicable. The algorithm is based on the combinatorial relaxation approach, a framework for solving a linear algebraic problem by iteratively relaxing it into an efficiently solvable combinatorial optimization problem. It does not rely on symbolic manipulations but on fast combinatorial algorithms on graphs and matroids. Furthermore, we provide an improved algorithm under an assumption based on dimensional analysis of dynamical systems.
    Comment: A preliminary version of this paper is to appear in Proceedings of the Eighth SIAM Workshop on Combinatorial Scientific Computing, Bergen, Norway, June 201
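    As a rough illustration of the combinatorial relaxation idea (a minimal sketch, not the paper's algorithm), the snippet below compares the structural degree estimate of det A(s), obtained from a maximum-weight bipartite matching over entry degrees, with the true degree; a gap indicates exactly the kind of numerical cancellation between accurate constants that the paper's method detects. The toy matrix and the helper name NEG are illustrative, and the sketch assumes NumPy, SciPy, and SymPy are available.

```python
# Minimal sketch of the combinatorial-relaxation idea: the structural upper
# bound on deg det A(s) is a maximum-weight bipartite matching over entry
# degrees; numerical cancellation can make the true degree smaller.
import numpy as np
import sympy as sp
from scipy.optimize import linear_sum_assignment

s = sp.symbols("s")
# Toy 2x2 polynomial matrix whose leading terms cancel in the determinant.
A = sp.Matrix([[s, s],
               [s, s + 1]])

# Entry-wise degrees (a large negative number stands in for zero entries).
NEG = -10**6
W = np.array([[sp.degree(A[i, j], s) if A[i, j] != 0 else NEG
               for j in range(2)] for i in range(2)], dtype=float)

# Maximum-weight perfect matching = structural (generic) degree estimate.
row, col = linear_sum_assignment(-W)          # SciPy minimizes, so negate
structural_deg = int(W[row, col].sum())       # 2 for this example

true_deg = sp.degree(sp.expand(A.det()), s)   # det = s, so degree 1
print(structural_deg, true_deg)               # 2 1 -> cancellation detected
```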

    Data-Driven Projection for Reducing Dimensionality of Linear Programs: Generalization Bound and Learning Methods

    This paper studies a simple data-driven approach to high-dimensional linear programs (LPs). Given data of past $n$-dimensional LPs, we learn an $n \times k$ projection matrix ($n > k$), which reduces the dimensionality from $n$ to $k$. Then, we address future LP instances by solving $k$-dimensional LPs and recovering $n$-dimensional solutions by multiplying by the projection matrix. This idea is compatible with any user-preferred LP solver and is hence a versatile approach to faster LP solving. One natural question is: how much data is sufficient to ensure the quality of the recovered solutions? We address this question based on the idea of data-driven algorithm design, which relates the amount of data sufficient for generalization guarantees to the pseudo-dimension of performance metrics. We present an $\tilde{\mathrm{O}}(nk^2)$ upper bound on the pseudo-dimension (where $\tilde{\mathrm{O}}$ hides logarithmic factors) and complement it with an $\Omega(nk)$ lower bound, so the bound is tight up to an $\tilde{\mathrm{O}}(k)$ factor. On the practical side, we study two natural methods for learning projection matrices: PCA-based and gradient-based methods. While the former is simple and efficient, the latter sometimes leads to better solution quality. Experiments confirm that learned projection matrices are beneficial for reducing the time to solve LPs while maintaining high solution quality.
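    The following is a minimal sketch of the projection idea under an assumed PCA-style learning step (not necessarily the paper's exact procedure): learn an n-by-k matrix P from past optimal solutions, solve the reduced k-dimensional LP in y, and recover x = P y. The instance data, sizes, and the use of scipy.optimize.linprog are illustrative.

```python
# Minimal sketch: PCA-style projection for a packing LP  max c^T x, Ax <= b, x >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, k, m = 50, 5, 30

# Pretend these are optimal solutions of past LP instances (one per row).
past_solutions = rng.random((200, n))

# PCA-style projection matrix: top-k right singular vectors of past solutions.
_, _, Vt = np.linalg.svd(past_solutions, full_matrices=False)
P = Vt[:k].T                                  # n x k projection matrix

# A new packing-type instance: max c^T x  s.t.  A x <= b,  x >= 0.
A, b, c = rng.random((m, n)), 1.0 + rng.random(m), rng.random(n)

# Reduced LP in y with x = P y; keep x >= 0 as extra rows  -P y <= 0.
res = linprog(-(c @ P),
              A_ub=np.vstack([A @ P, -P]),
              b_ub=np.concatenate([b, np.zeros(n)]),
              bounds=[(None, None)] * k,
              method="highs")
x_hat = P @ res.x                             # recovered n-dimensional solution
print("objective of recovered solution:", c @ x_hat)
```

    The recovered solution is only as good as the subspace spanned by P, which is exactly the trade-off the generalization bound in the abstract quantifies.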

    Faster Discrete Convex Function Minimization with Predictions: The M-Convex Case

    Recent years have seen growing interest in accelerating optimization algorithms with machine-learned predictions. Sakaue and Oki (NeurIPS 2022) developed a general framework that warm-starts the L-convex function minimization method with predictions, revealing the idea's usefulness for various discrete optimization problems. In this paper, we present a framework for using predictions to accelerate M-convex function minimization, thus complementing previous research and extending the range of discrete optimization algorithms that can benefit from predictions. Our framework is particularly effective for an important subclass called laminar convex minimization, which appears in many operations research applications. By using predictions, our methods can improve upon the best known worst-case time complexity bounds and even have the potential to go beyond a lower-bound result.
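    A minimal sketch of the warm-start idea (not the paper's algorithms): steepest local-exchange descent for a separable convex, hence M-convex, objective over {x in Z^n, x >= 0, sum_i x_i = B}, started once from a naive feasible point and once from a predicted one. The objective, sizes, and the notion of "prediction" here are illustrative; the point is only that a good starting point reduces the number of exchange steps.

```python
# Minimal sketch: exchange descent for an M-convex objective, warm-started.
import numpy as np

def f(x, w):
    # Separable convex objective: sum_i w_i * x_i^2 (illustrative choice).
    return float(np.dot(w, x * x))

def exchange_descent(x, w):
    x = x.copy()
    steps = 0
    while True:
        best, best_ij, base = 0.0, None, f(x, w)
        for i in range(len(x)):       # try moving one unit from i to j
            if x[i] == 0:
                continue
            for j in range(len(x)):
                if i == j:
                    continue
                x[i] -= 1; x[j] += 1
                gain = base - f(x, w)
                x[i] += 1; x[j] -= 1
                if gain > best:
                    best, best_ij = gain, (i, j)
        if best_ij is None:           # local optimum = global optimum for M-convex f
            return x, steps
        i, j = best_ij
        x[i] -= 1; x[j] += 1
        steps += 1

rng = np.random.default_rng(0)
n, B = 8, 40
w = 1.0 + rng.random(n)

cold = np.zeros(n, dtype=int); cold[0] = B                          # naive start
pred = np.full(n, B // n, dtype=int); pred[0] += B - n * (B // n)   # "prediction"

print(exchange_descent(cold, w)[1], "steps from cold start")
print(exchange_descent(pred, w)[1], "steps from prediction")
```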

    Algebraic combinatorial optimization on the degree of determinants of noncommutative symbolic matrices

    We address the computation of the degrees of minors of a noncommutative symbolic matrix of the form $A[c] := \sum_{k=1}^m A_k t^{c_k} x_k$, where the $A_k$ are matrices over a field $\mathbb{K}$, the $x_k$ are noncommutative variables, the $c_k$ are integer weights, and $t$ is a commuting variable specifying the degree. This problem extends the noncommutative Edmonds' problem (Ivanyos et al. 2017) and can formulate various combinatorial optimization problems. Extending the studies of Hirai (2018) and Hirai and Ikeda (2022), we provide novel duality theorems and a polyhedral characterization for the maximum degrees of minors of $A[c]$ of all sizes, and we develop a strongly polynomial-time algorithm for computing them. This algorithm can be viewed as a unified algebraization of the classical Hungarian method for bipartite matching and the weight-splitting algorithm for linear matroid intersection. As applications, we provide polynomial-time algorithms for weighted fractional linear matroid matching and for linear optimization over rank-2 Brascamp-Lieb polytopes.
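    The classical special case named in the abstract can be checked directly: when each $A_k$ is a distinct unit matrix $E_{ij}$, the degree in $t$ of $\det A[c]$ equals the maximum weight of a perfect matching with weights $c$, i.e., the setting of the Hungarian method. The sketch below (illustrative sizes and random weights, assuming SciPy and SymPy) verifies this coincidence on a small generic instance.

```python
# Minimal sketch of the Hungarian-method special case: generic entries x_{ij} t^{c_{ij}},
# so deg_t det A[c] equals the maximum weight of a perfect matching on the weights c.
import numpy as np
import sympy as sp
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
n = 4
c = rng.integers(0, 10, size=(n, n))          # integer weights c_{ij}

t = sp.symbols("t")
X = sp.Matrix(n, n, lambda i, j: sp.Symbol(f"x_{i}{j}"))        # generic coefficients
A = sp.Matrix(n, n, lambda i, j: X[i, j] * t**int(c[i, j]))     # the matrix A[c]

# Combinatorial side: maximum-weight perfect matching on the weights c.
row, col = linear_sum_assignment(-c)
matching_value = int(c[row, col].sum())

# Algebraic side: degree in t of the symbolic determinant (no cancellation,
# since the x_{ij} are distinct symbols).
deg_det = sp.degree(sp.expand(A.det()), t)
print(matching_value, deg_det)                # the two values coincide
```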

    On Solving (Non)commutative Weighted Edmonds' Problem
